Abstract:
Soil scientists require cost-effective methods to make accurate regional predictions of soil organic carbon (SOC) content. We assess the suitability of airborne radiometric data and digital elevation data as covariates to improve the precision of predictions of SOC from an intensive survey in Northern Ireland. Radiometric data (K band) and, to a lesser extent, altitude are shown to increase the precision of SOC predictions when they are included in linear mixed models of SOC variation. However, the statistical distribution of SOC in Northern Ireland is bimodal and therefore unsuitable for geostatistical analysis unless the two peaks can be accounted for by the fixed effects in the linear mixed models. The upper peak in the distribution is due to areas of peat soils. This problem may be partly countered if soil maps are used to classify areas of Northern Ireland according to their expected SOC content and different models are then fitted to each of these classes. Here we divide the soil in Northern Ireland into three classes, namely mineral, organo-mineral and peat. This leads to a further increase in the precision of SOC predictions, and the median squared error is 2.2 %². However, a substantial number of our observations appear to be mis-classified, and the mean squared error in the predictions is therefore larger (30.6 %²), since it is dominated by large errors due to mis-classification. Further improvement in SOC prediction may therefore be possible if better delineation between areas of large SOC (peat) and small SOC (non-peat) could be achieved.
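The class-wise prediction step can be sketched in a deliberately simplified form: ordinary least squares on the fixed effects (K-band radiometrics and altitude) for a single soil class, standing in for the authors' linear mixed models; the covariate values and the generating coefficients below are hypothetical.

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * d for a, d in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(covariates, y):
    """Ordinary least squares with an intercept, via the normal equations."""
    X = [[1.0] + list(row) for row in covariates]
    p = len(X[0])
    XtX = [[sum(x[i] * x[j] for x in X) for j in range(p)] for i in range(p)]
    Xty = [sum(x[i] * yi for x, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# Hypothetical calibration data for one soil class: (K-band, altitude) -> SOC %.
# The responses were generated exactly from SOC = 2 + 3*K - 0.5*altitude.
covariates = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
soc = [2.0, 5.0, 1.5, 4.5, 6.5]
beta = ols(covariates, soc)   # [intercept, K coefficient, altitude coefficient]
```

In the stratified setting of the abstract, one such fit would be repeated for each of the mineral, organo-mineral and peat classes.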
Abstract:
A rain gauge network of 10 tipping bucket rain gauges on the Mid-Atlantic coast of the United States has been in continuous operation since June 1, 1986. Rain-rate distributions and estimated slant path fade distributions at 20 and 30 GHz covering the first five-year period have been derived from the gauge network measurements and published data of Goldhirsh, Krichevsky and Gebo (see ibid., vol.40, no.11, p.1408, 1992). In this article, we present rain-rate time duration statistics. The conversion of rain-rate duration statistics derived from in situ measurements to slant path fade duration statistics is complicated because of the vertical and lateral inhomogeneity of the rain. A benchmark set of fade duration statistics at 20 and 30 GHz for a vertical path is derived from the rain-rate duration statistics employing Crane's (1980) global model. These results may be used by investigators for comparison with and/or conversion to slant path fade duration statistics. Such statistics are important for better assessing optimal coding procedures over defined bandwidths.
Abstract:
As far back as the late 1970s, the impact of affordable, high-speed computers on the theory and practice of modern statistics was recognized by Efron (1979, 1982). As a result, the bootstrap and other computer-intensive statistical methods (such as subsampling and the jackknife) have been developed extensively since that time and now constitute very powerful (and intuitive) tools to do statistics with. This article provides a readable, self-contained introduction to the bootstrap and jackknife methodology for statistical inference; in particular, the focus is on the derivation of confidence intervals in general situations. A guide to the available bibliography on bootstrap methods is also offered.
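The two resampling schemes the article surveys can be sketched in a few lines; the percentile method below is only one of the confidence-interval constructions the article covers, and the sample data are made up.

```python
import math
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-method bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    n = len(data)
    # Resample with replacement, compute the statistic each time, sort.
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

def jackknife_se(data, stat=statistics.mean):
    """Jackknife (leave-one-out) standard error of a statistic."""
    n = len(data)
    loo = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    m = statistics.mean(loo)
    return math.sqrt((n - 1) / n * sum((x - m) ** 2 for x in loo))

sample = [2.1, 2.4, 1.9, 2.8, 3.0, 2.2, 2.5, 2.7, 2.0, 2.3]
lo, hi = bootstrap_ci(sample)
se = jackknife_se(sample)
```

For the mean, the jackknife standard error coincides with the classical s/√n, which makes a handy sanity check on the implementation.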
Abstract:
We investigate the problem of characterizing the distribution of independent identically distributed random variables X_1,...,X_m (general distribution) by the distribution of linear statistics and statistics of maximum with positive coefficients. Necessary and sufficient conditions are found under which such a characterization takes place.
Abstract:
It is shown in this article that a technique that was previously introduced to approximate the density functions of certain continuous random variables can be successfully applied to discrete distributions. The probability mass function approximants are expressed as the product of an appropriate base density function and a polynomial adjustment. A degree selection criterion based on the integrated squared difference between approximants of successive degrees is proposed. The methodology, which is conceptually simple and easily implementable, is applied to a binomial random variable, the largest order statistic in a binomial sample, a Poisson distribution, and two rank-sum test statistics.
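A minimal sketch of the base-times-polynomial idea, assuming the adjustment coefficients are found by matching raw moments on a truncated support; the base (Poisson), target (binomial) and degree are illustrative choices, and the article's degree-selection criterion is not implemented here.

```python
import math

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x)

def binom_pmf(x, n, p):
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * d for a, d in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def polynomial_adjusted_pmf(base, support, moments):
    """Coefficients c_j so that base(x) * sum c_j x^j matches the raw moments."""
    d = len(moments)
    A = [[sum(base(x) * x ** (j + k) for x in support) for j in range(d)]
         for k in range(d)]
    c = solve(A, moments)
    return lambda x: base(x) * sum(cj * x ** j for j, cj in enumerate(c))

# Approximate Binomial(10, 0.3) with a Poisson(3) base and a degree-4 adjustment.
support = range(0, 41)
target_moments = [sum(binom_pmf(x, 10, 0.3) * x ** k for x in range(11))
                  for k in range(5)]   # raw moments 0..4 of the target
approx = polynomial_adjusted_pmf(lambda x: poisson_pmf(x, 3.0), support,
                                 target_moments)
total = sum(approx(x) for x in support)
mean = sum(x * approx(x) for x in support)
```

By construction the adjusted approximant reproduces the matched moments on the truncated support, so its total mass and mean agree with the target's.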
Abstract:
A Weibull statistical analysis of breakdown voltages of thin polyethylene-insulated power cable slices is performed on large populations. Computation of confidence intervals implies that the statistically correct description is a three-parameter Weibull distribution, i.e., one with a nonzero location parameter. It is shown that a data set described by a two-parameter Weibull distribution contains additional statistical dispersion factors which may or may not yield information on the insulation itself; in other words, a zero location parameter always results from inhomogeneities in the sampling. Comparative testing is used to discriminate between the various sources of inhomogeneity. When it is obtained under carefully controlled experimental conditions, the location parameter value can be considered a true quality factor of the system under test. When performed on extensive data sets, the statistical analysis of data collected in routine breakdown tests provides a very sensitive tool for investigating small changes in electrical insulation.
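The practical effect of the location parameter can be illustrated with a small simulation, a sketch rather than the authors' procedure: samples are drawn from a three-parameter Weibull, and the shape is estimated by least squares on Weibull probability paper, once with the correct location and once with it forced to zero.

```python
import math
import random

def weibull3_sample(n, shape, scale, loc, seed=1):
    """Inverse-CDF sampling: X = loc + scale * (-ln U)**(1/shape)."""
    rng = random.Random(seed)
    return [loc + scale * (-math.log(rng.random())) ** (1 / shape)
            for _ in range(n)]

def weibull_plot_shape(data, loc=0.0):
    """Shape (slope) from least squares on Weibull probability paper,
    treating the location parameter as known."""
    xs = sorted(data)
    n = len(xs)
    u = [math.log(x - loc) for x in xs]
    # Median-rank plotting positions F_i = (i - 0.3) / (n + 0.4).
    v = [math.log(-math.log(1 - (i + 0.7) / (n + 0.4))) for i in range(n)]
    mu, mv = sum(u) / n, sum(v) / n
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

# Simulated breakdown voltages: shape 2, scale 5, location 10 (arbitrary units).
data = weibull3_sample(500, shape=2.0, scale=5.0, loc=10.0)
shape_with_loc = weibull_plot_shape(data, loc=10.0)  # close to the true shape
shape_zero_loc = weibull_plot_shape(data, loc=0.0)   # inflated by ignoring loc
```

Forcing the location to zero compresses the data on the log scale, so the fitted two-parameter shape is systematically inflated, consistent with the abstract's point that a two-parameter description mixes in extra dispersion factors.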
Abstract:
In this paper, we present a repetitive sampling method to construct control charts using exponentially weighted moving averages (EWMA) and double exponentially weighted moving averages (DEWMA) to monitor shifts in the process. For non-normal processes, the t-distribution with various degrees of freedom (df = 4, 10, 20, 40, 50) is used as a symmetric distribution; the gamma distribution with unit scale parameter and various shape parameters (0.5, 1, 2, 3, 4) is used as a positively skewed distribution; and the Weibull distribution with unit scale parameter and shape parameters 10 and 20 is used as a negatively skewed distribution. We use Monte Carlo simulations to check whether the process is out of control. We use the average run length to assess the ability of the proposed control charts to identify a shift in a process earlier than other control charts currently used to monitor the same type of process. The proposed control charts are applied to two real datasets.
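A sketch of the plain EWMA statistic with its time-varying control limits, under normality and without the repetitive-sampling layer or the DEWMA variant; λ = 0.2, L = 3 and the shifted data are illustrative.

```python
import math

def ewma_chart(xs, mu, sigma, lam=0.2, L=3.0):
    """EWMA chart: returns (z, lcl, ucl, signal) for each observation.

    z_t = lam * x_t + (1 - lam) * z_{t-1}, with exact (time-varying) limits
    mu +/- L * sigma * sqrt(lam / (2 - lam) * (1 - (1 - lam)^(2t))).
    """
    z = mu
    out = []
    for t, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        half = L * sigma * math.sqrt(lam / (2 - lam)
                                     * (1 - (1 - lam) ** (2 * t)))
        out.append((z, mu - half, mu + half, abs(z - mu) > half))
    return out

# 20 in-control observations at mu = 0, then a sustained 1.5-sigma upward shift.
data = [0.0] * 20 + [1.5] * 20
points = ewma_chart(data, mu=0.0, sigma=1.0)
signals = [i for i, (_, _, _, s) in enumerate(points) if s]
```

With these parameters the chart stays silent through the in-control stretch and flags the shift a few observations after it begins; the average run length mentioned in the abstract summarizes exactly this detection delay over many simulated runs.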
Abstract:
Context: Interpreting spatial autocorrelation is complicated by differences in data type, spatial conformation, and contiguity definitions. Though lacking consistent meaning, Moran's I is commonly reported, compared, and interpreted based on conceptual ideals. To provide consistent, logical, and intuitive meaning and enable broader synthetic work, a new approach to I is needed.
Objectives: We sought to standardize I and true it to conceptual ideals and existing intuition regarding regular correlations. We also wished to test performance of the transformed metrics over a diversity of designed and empirical datasets.
Methods: We developed two means to rectify I. Both fit null distributions from data permutation to a target frame of [-1, 0, 1], followed by projection of the original I into this conformation. One method used three-point registration employing the distribution median and select tail percentiles. The other directly projected all I based on theory or cumulative frequencies reflecting the distribution of regular correlations. Repeatability and sensitivity of results were examined for varied permutation replication and framing parameter choices. Empirical and designed datasets were used to compare rectified to traditional metrics.
Results: Both rectification methods improved the distributional characteristics of I. Three-point registration produced overly broad distributions with discontinuous peaks. Continuous projection fit the distribution for regular correlations precisely. Diverse case studies demonstrated failings of I and the clarity gained by rectification.
Conclusions: Rectified I enabled meaningful comparisons of spatial patterns for diverse data and landscape conditions. Preserving the intuitive value of Moran's I while providing a theoretically sound and consistent approach for standardizing its values should foster sustained use.
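The ingredients of the rectification, Moran's I plus a permutation null distribution, can be sketched on a toy one-dimensional lattice; the projection step itself depends on framing details not reproduced here.

```python
import random

def morans_i(values, weights):
    """Moran's I for values on a lattice with a symmetric weight matrix."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

def permutation_null(values, weights, n_perm=999, seed=0):
    """Null distribution of I under random relabelling of locations."""
    rng = random.Random(seed)
    vals = list(values)
    null = []
    for _ in range(n_perm):
        rng.shuffle(vals)
        null.append(morans_i(vals, weights))
    return null

# Toy example: 8 cells on a line, rook adjacency, strongly clustered values.
n = 8
weights = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]
values = [0, 0, 0, 0, 1, 1, 1, 1]
observed = morans_i(values, weights)
null = permutation_null(values, weights)
```

For this configuration the observed I is 5/7 rather than 1, despite the pattern being as clustered as the lattice allows, which is the kind of inconsistency between I and conceptual ideals that motivates rectifying I against its permutation null.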
Abstract:
Very often, the service level of a single-period newsboy-type product is set at such a low level that: (i) stockouts occur in the majority of the periods, and (ii) a large right-hand side of the empirical demand distribution is never observable. This paper reports a practical approach for estimating the periodic-demand distribution of such a product. The approach has three components: (i) using the non-parametric ‘product limit’ method to estimate the fractiles of the observable left-hand side of the empirical distribution; (ii) using a subjective approach and an ‘extrapolation of hourly sales’ approach to ‘fill in’ the missing right-hand side of the empirical distribution; (iii) fitting the estimates obtained in the preceding two components to a Tocher curve, which can handle the diversity of shapes of a realistic demand distribution and is also computationally very convenient for subsequent production/inventory calculations. The entire approach is shown to be simpler but more powerful than existing alternatives for the problem.
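Component (i), the non-parametric product-limit (Kaplan-Meier) estimate applied to stockout-censored demand, can be sketched as follows; the four observations are illustrative.

```python
def product_limit(observations):
    """Kaplan-Meier ('product limit') estimate of the demand survival curve.

    observations: (value, exact) pairs; exact=False marks a stockout period,
    where true demand is only known to be at least the recorded sales.
    Returns [(v, P(demand > v))] at each exactly observed demand value.
    """
    distinct = sorted({v for v, exact in observations if exact})
    s = 1.0
    curve = []
    for v in distinct:
        at_risk = sum(1 for x, _ in observations if x >= v)          # still "at risk"
        events = sum(1 for x, exact in observations if x == v and exact)
        s *= (at_risk - events) / at_risk                            # product-limit step
        curve.append((v, s))
    return curve

# Four periods: demands 1 and 3 fully observed; 2 and 4 censored by stockouts.
curve = product_limit([(1, True), (2, False), (3, True), (4, False)])
```

The censored periods still contribute to the risk sets, which is exactly how the product-limit method recovers the observable left-hand side of the demand distribution without discarding stockout periods.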